Email Interview: Ray Zone on Al Razutis - Stereoscopic 3D film art and computer desktop editing - publishing - August 9, 2010 Note: This unedited material represents the complete e-mail transcript, and the speaking/writing subject provides unconstrained responses. As is customary, it will be subject to editing by Ray Zone for the purposes of publishing his book on this and other stereoscopic producers contained in the volume 3-DIY Stereoscopic Movie Making on an Indie Budget - by Ray Zone, Focal Press, 2012
- Final Chapter 23 excerpts: 'Al Razutis: A Complete 3D Artist'. Excerpts from the book are on Amazon Books, with photos.
RZ: What gave you the idea to make your own 3D movie? AR: I grew up in LA in a movie & tv culture which featured rare
moments of '3D' in an otherwise '2D film world'. But the Hollywood culture
and media relevance vanished for me in the 60's as I plunged totally into
the 'underground' and experimental art and cultures of the 'anti-establishment'
60’s, which liberated me and my fellow artists from the overbearing
and commercialized slop that had become the norm. And that, of course,
allowed for all kinds of experiments with film, video, and multi-media
arts in 2D and 3D, and continues to this day. When I think about 3D movies in LA when I was a kid they never got me
going as far as wanting to make my own, because the subjects (typically
horror) and formats (typically big-screen or studio releases) were both
‘entertainment’ and inaccessible in terms of how I would get into making them.
When I first saw an ‘underground’ film (Andy Warhol’s Chelsea Girls, mid-60’s) it got me going, since if he could make such a free-form artistic
and non-commercial film (and there were other underground filmmakers showing
in the 60’s), I could too. So I showed and made underground movies in
the late 60’s and started shooting stills and primitive computer graphics
in 3D in the 70’s and continued into 3D video and film ever since. My experience shows that what gets an independent filmmaker going, whether
in 2D or 3D, are portable low-cost cameras (like the Bolex in the 60’s, the Toshiba
in the 90’s, or the recent 2010 Fuji W1 and W3), affordable editing (like
desktop editing and post today), and affordable non-theatrical showings
(on 3D TV or low cost gallery/theater projections). With low-cost portability,
everyone who is inspired by 3D can start making, distributing, and showing
their own works, and on their own terms. Money doesn’t have to determine
success. A 3D film culture is possible, as has been the case since the
creation of cinema. We had no readily accessible 3D Bolex adapters in the 60’s (they were
there, but really under the radar), so I experimented with 3D CG using
16mm cameras and intervalometers on a monochrome screen with 64k memory.
This was simultaneous with my experiments with making ‘3D movies’ featuring
individual holograms recorded on a cylindrical film gate with slit apertures
and projecting holographic images on silver parabolic screens so the image
(typically static) would ‘float in space’. Even if I had to work in ‘static’
formats I always saw it in ‘motion’. In the early 70's I also started making holograms (at a studio lab,
Visual Alchemy, which I built up for holography, film, video, audio and
film optical printing). And once again, this happened because the holographic
cameras were self-built, experimental, and affordable, and the audience, which had
seen nothing like it, grew. So I have been making ‘3D movies’ in one
form or another throughout my entire adult life (40 years and counting).
In the late 70’s and through the 80's I continued to shoot experimental
movies and stills in 3D, was a film professor (at SFU, Burnaby, Canada)
and taught several students (Noel Archambault being a notable example)
the arts and crafts of 3D cinematography and editing and presentation
(on a hand-built and painted silver screen with polarized glasses and
dual-interlocked 16mm projectors). I also saw my personal 3D film interests
blocked by various ignorant or uncaring administrators (National Film
Board of Canada), but my day job always saved something for film work.
In 1995 I discovered the portable Toshiba 3D camera when Gary Cullen
(holographer, friend and collector of 3D) showed me his Toshiba 3D cam
and I went “Yes! Can I borrow it?”, and he let me have it free of charge
for extended times. With this camera and various external recording decks
(3/4" U-Matic, SVHS, 8mm), I shot both documentaries of holography and
holographic artists of the time (for a project titled 'West-Coast Artists
in Light'), in Vancouver and Los Angeles, and created a number of '3D
video art' shorts which I exhibited at 3D fests (LA SCSC), at the Louvre
in '97, and at premiere screenings in Vancouver and Portland in '98. My ideas came from the compulsion (inherited from my avant-garde film
work) to re-create in 3D various 'worlds', whether they were derived from
photographic subjects (people, scenes, landscapes) or synthetic subjects
(like VRML 2.0 worlds and their textures, movements). I shot in analog
(field-alternating) NTSC 3D a variety of live-action subjects (mime artists,
landscapes, travel locations, meditational scenes, city and nature) and
at the same time was involved in creating 3D (monoscopic and stereoscopic)
‘worlds’ in VRML 2.0 and 97 at another location (Banff Media Centre, Alberta)
which supported art projects. So at this point in the late 1990’s I was working with both ‘passive’
3D, where I made 3D videos and projected them in film theaters and galleries
on a silver screen using two matched video projectors and Andrew Wood’s
‘de-multiplexer’, and I was experimenting with ‘interactive 3D’ building
VRML worlds with stereoscopic (anaglyph) texture maps on world objects
and viewing them (occasionally) in stereoscopic 3D on SGI Indy machines
and flicker glasses or VR headware. The ideas just kept coming, but the self-financed 3D movies would stall
then and now when my own finances became desperate. With the introduction
of desktop digital video editing in the 90’s and its improvements and
popularization in the years following, it was possible to lose that analog
editing bay (which really wasn’t as good as its predecessor, the film editing
bay, whether upright or flatbed) and begin editing and posting analog,
then purely digital, works for editing and output by computer. Along with
non-linear desktop editing of 3D video clips and subjects, the desktop
computer has now made authoring CG scenes in stereoscopic 3D a reality,
and has brought in a new era of 'filmmakers' who create 'interactive worlds'
in 3D games, or personal voyages. The fact that stereoscopic 'interactive' first-person shooter games are
a form of cinema and '3D film' should come as no surprise. In the ‘old
days’ of the 1990’s we created virtual worlds where you could change your
‘size’ (like Alice), fly, walk, touch an object and play sounds or movies,
and the viewing of these worlds has been around since the late 90’s (nVidia
and Asus workstations in stereo 3D). We created speech-interactive VR with avatar humanoid characters in 2D
or 3D for the Mission Corporation (Bellevue, 2000-2001) where I was employed
in a ‘day job’ as manager of the 3D VR projects. This is when we demanded
fast processing, built our own set-top boxes to display VR, and later saw
similar standards implemented in the first Xbox releases. And it
all started with wire-frame boxes for me in 3D. It is clear that we have new forms of 3D cinema now, and many are in
the experimental phase. The first-person game narrative where the player
is navigating, reacting, and choosing 'which direction to go' and blowing
things up or interacting with virtual characters is a natural evolution
of the previous linear (beginning-middle-end) 3D movies, and certainly
a descendant of 3D VRML world navigation, interaction, however ‘primitive’
by today’s standards those worlds are. The future of 3D is both holographic and interactive and part of that
future has arrived already in 3D games as ‘your own movie’. My ideas for making 3D movies were, and continue to be, along the lines
of 'what can be done with motion picture stereo 3D that hasn't been done
in 2D film, or 3D film, or other medias?’ ‘How can we proceed to a visual
aesthetic and critical - interpretive appreciation of these works (a ‘knowledge’
of them) based on the spatial motion picture paradigm experienced as stereoscopic
3D or holographic 3D'? Now those are ideas full of implication not just
for me but for the 3D culture around me and we’re all contributing to
a future ‘art’ and ‘practice’ and ‘theory’ of 3D. We need a better sense of aesthetics, history, precedent, innovation,
and grammar/syntax for stereo 3D. All works created yesterday, today and
tomorrow contribute to the creation of such aesthetics and grammar. Some
styles and 'rules' become popular because the mass audiences like the
results better than others. Some rules arise when 3D film theorists
investigate and make their assertions. 3D film language is an evolving
thing, and the forces that shape it will be economic, artistic, and theoretical.
No one discipline gets to rule the roost. This whole thing of 3D for me is like an 'adventure in creating ways
of seeing and interpreting space and movement', whether it is rendered
as stereoscopic 3D or holographically. Most of my ideas come from various
artistic stimulations (for example, the act of ‘seeing’ and totally being
‘changed’ by someone else's work in 3D) or sudden inspirations or surrealist
jokes that enter my mind. And a lot of it is influenced by the cultures
around me. It is not for the love of 'gee whiz, it's new!' or "gee whiz,
it's 3D!" SIDEBAR & ELABORATIONS & RESTATEMENTS & FOOTNOTES: AR: Photography and motion picture cinematography for me
are inherently 3D processes, even if in the early days of my work
I only realized them on a 'flat' screen. I interpret spatial scenes
(the world is 3D plus time, after all) to record in spatial terms
but conform to the display medium of that time, whether 2D or 3D,
whether still or in motion. Of course, the adaptation of a 3D world to 2D representation (classic
2D photography, film, video) is a well known, historically noted
and discussed phenomenon. Whether we are talking about cave paintings,
or Renaissance paintings, or modern art or 2D cinematography we
have ample examples of great works and great lessons of their time. Stereoscopic 3D takes a modern departure is with 3D film/video
and holographic imaging. It is from my early 70's interests in creating
and exhibiting holograms and my avant-garde films that my stereoscopic
3D of the 90’s began to feature specific experimental art interests
(the illusion of flat screen, exhibition context, layering of images,
enhanced transitional gestures, and compositional unorthodoxy and
'bending the rules' by using stroboscopic scenes) began to take
form. Additionally, my 3D movies in the 90’s involved getting a better
visual interpretation of the hologram and holographic processes
(and displays) and thus necessitated using stereoscopic 3D movie
(video) cameras. A stereoscopic 3D recording and playback of holographic
images in 3D space was immediately 'self explanatory': the hologram
and holographic image could be recorded and rendered as a 'physical
fact', without the camera motion otherwise required to display parallax, occlusion,
etc. This phase of stereoscopic 3D recording/playback (using the Toshiba
cam) also introduced me to field-sequential 3D recording, editing,
and playback technologies (flicker glasses or dual-projection (with
passive polarized glasses) presentations). The portable Toshiba
camera head (with fixed interaxial lenses, converging at approx.
12') was a simple and extremely useful tool and, when cabled to an
external recording deck (3/4" U-matic, Super-VHS, etc.), produced
a recording result that exceeded the camera's internal VHS-C recorder.
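(For readers unfamiliar with the field-sequential format mentioned above: the basic idea is that the left and right views ride on the alternating fields, i.e. scan lines, of a single interlaced signal, and a demultiplexer, such as the Andrew Woods unit discussed later, splits them back out for dual projection. The sketch below assumes an interlaced frame stored as a NumPy array of scan lines; the function name and the even/odd field assignment are illustrative assumptions, not a description of the Toshiba hardware.)

```python
import numpy as np

def demux_field_sequential(frame, left_on_even=True):
    """Split one interlaced field-sequential frame into left/right views.

    frame: (height, width, channels) array of scan lines. Even-numbered
    lines carry one eye's field and odd-numbered lines the other; which
    eye is which depends on the recorder, hence the flag.
    Returns (left, right), each at half vertical resolution.
    """
    even_field = frame[0::2]
    odd_field = frame[1::2]
    return (even_field, odd_field) if left_on_even else (odd_field, even_field)

# Example with a stand-in 480-line frame
frame = np.zeros((480, 640, 3), dtype=np.uint8)
left, right = demux_field_sequential(frame)
print(left.shape, right.shape)  # (240, 640, 3) (240, 640, 3)
```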
Of course, the resolution of the Toshiba was limited by the small
CCD's that were employed at the time, and resulted in low resolution
(SVHS standard) analog recordings (a standard that has been exceeded
by HD DV) that retained many of the desired photographic/cinematographic
qualities of the 3D scenes that were recorded. Even if limited in
photographic range (shutter, aperture, resolution) the portable
Toshiba head was preferable to a dual-motion picture camera system
which I had employed previously in both works and tests and teaching
- a bulky twin Eclair or Arriflex BL system on a horizontal rig
or separate tripods requiring precise alignment for each shot and
head & tail slates for sync. In 2010, we see similar strategies employed, where two HD camera
heads (e.g. Iconix) are output to an external storage device (HDD)
in a synchronous manner. The resolution has gone way up; the idea
is pretty much the same. Concurrent with my 3D documentaries of holographics of 1995-1999,
I launched a series of stereoscopic shorts (1995 to 1999) which
I released separately and exhibited in festivals (SCSC) or public
showings. These shorts, which I called '3D video art' were released
under various titles: MEDITATIONS, VIRTUAL FLESH, VIRTUAL IMAGING,
NAGUAL, FRANCE'97, and were influenced by my previous work in experimental
and avant-garde cinema (over twenty years of work dating back to
1967), my work in holographics (dating back to 1973) and the art
and avant-garde film cultures of which I was a member and critical
and pedagogical contributor since the late 60's. As 'avant-garde'
works, they were pretty tame. The context of 3D film and video culture of the 90's was largely
dominated by hobbyists, clubs, and a few professionals. These works
were an attempt to accommodate both interests and as such have the
weaknesses and strengths inherent in such accommodations. Ideas
and inspirations were not hard to come by. They surrounded all aspects
of my multi-media endeavors and drew from a wellspring of historical,
contemporary works in all medias (sculpture, painting, film and
video, and holographics). And 3D movies in the 90’s were as exciting a proposition then
as in 2010 and beyond, now that we have portable and consumer friendly
3D cameras and desktop editing systems. In fact, for me, there can be no ‘rules' that limit the expression
to what is ‘proper’ or ‘allowed’. Sure, there are ways to create
3D that is 'comfortable to view' (no eyestrain, no nausea, with
a feeling of 'normality') but this comfort is simply an accommodation
to our human visual systems while delivering content that may suggest
'other ways of seeing' and experiencing the 'magic' (not the photorealism)
of stereoscopic 3D. An essay of mine, 'Notes on Holography' (Franklin
Press, 1976), took issue with 'mimetic representation' of a 3D scene
as being paramount in the 'art of holography'. My antipathy to mimesis
(the imitation of reality), whether in my films, videos, photography
or holography, I carried forward into my works in stereoscopic 3D video
'art'. I like the world of surrealist bumps and surprises, but I
have released some very ‘conservative’ 3D at times, much to my chagrin
and that of the audience. When I exhibited my 3D videos at the Louvre 3D film and video exhibition
in 1997, I was surprised by the negative reaction to some of the
works by experimental and avant garde film makers in attendance
who accused me of being too 'conservative' in my aesthetics. Of
course, these younger filmmakers had only seen my previous avant-garde
films and were expecting ‘radical 3D’ from me. My mistake was assuming that
a general audience was rather naïve about 3D and should be brought
into it ‘gently’. That filmmaker criticism stung, and made me
abandon the pedagogical method (the primitive cinema technique of
fixed camera position and long takes) I used in some exhibited works
and provided the impetus for more radical techniques (wild camera
motions, image layering, color quantizing, blurring, time delay
echoing, use of strobe in shots, angle and pov extremes) that I
began immediately after the Louvre show and which resulted in several
works (France 97, Nagual) that featured these departures. I think
we’re all susceptible to, and responsible to, the cultures of our times
and those influences result in an interesting balancing act for
the highly personal 3D movie maker. We have the ongoing legacy of the historical cinemas as well: the
Lumiere cinemas of ‘realism’ and long takes competing/interacting
with the cinemas of ‘surrealism’ and Melies magic in editing and
post. And contemporary to both, there is the ‘film avant-garde’
and experimental cinemas of yesterday, today, and tomorrow. I am continually heartened by a belief (and experimentation) that
stereoscopic 3D is simply a step towards the holographic recording and
exhibition of full-color motion pictures with sound: the holographic 3D
TV and theater forms of the future. The adventure will not end with
stereo 3D. RZ: How did you choose your subject or story for your 3D movie? AR: The subjects chosen were usually based on my art activities at the
time. If I was involved in early computer animation, teaching, experimental
and avant-garde film, or documentaries on holography, then these subjects
were either incorporated as the subjects for 3D movies or influenced the
subjects chosen for 3D movies. If I am shooting a hologram or holographic
studio then I really don't need to 'move the camera' to show that the
holographic image is in fact 3D. So, a lot of static compositions are
used and I direct that the action inside the stereoscopic window and 'space'
provide the defining movements and definitions of that space. This was
extremely interesting when shooting holographic image projections (through
the 'window') and resulted also in a number of 3D movie collaborations
I did with Dean Fogal (Tuba Physical Theater, Vancouver) where we used
a mime artist to tell a simple story and define the space and narrative
elements of his story in a stereoscopic 'space' using the Toshiba 3D camera
and portable recorders. These were extremely low budget shoots either on location or in studio,
shot like a Lumiere movie with few edits, and they proved very unpopular
with the experimental film crowd in Europe. Other subjects were chosen for
their value in depicting transformation in cities and nature or in the
human body and human actions. Therefore in France 97 (1997) I would use
superimpositions in 3D or long transitions in 3D to compare forms and
movement, or I would compare movement and stasis in 'Statues' (1997) which
I also shot in Paris, or I would use movement against the 'flat screen'
in 'Virtual Flesh' (1996) where naked bodies of various ages and sizes, including
pregnant, would morph into each other (via post) and create sculptures
of the human body 'in motion' and in 3D as gallery projection or installation
piece. I am prone to the Surrealist compulsion to explore the 'collision of
objects' (and meanings) producing new collisions and meanings in my and
the viewer’s mind. I explore the possibility of different subject matter
to interact in meaning, metaphor, and space, in both my holographic works
and my stereoscopic 3D works. This is not only a surrealist impulse for me
but also an impulse to 'comment' on the medias themselves, and comment
on their limitations or (in the social art realm) their potential for
hyperbole ('magic', 'vision', the 'new', the 'awesome', the 'never been
seen before'); at times the ‘academic’ or ‘critical’ personality inserts
itself to criticize, ridicule or reflect against both my art culture (stereo
3D or holographic) and my own posturings (as 'artist', 'innovator', 'adventurer').
Aside from the subjective reasons we all have for creating works or imagining
possibilities, my subjects can be chosen by circumstance (can I afford
to shoot my idea? am I traveling? is there anything interesting to shoot?
do I need a permit?) and by culture (What did they think of my last works?
What are people interested in? Is it a showing to people that know 3D
or someone totally new to the medium? Is it to an 'art audience' or a 'commercial
audience'?). RZ: Why did you decide to use the camera/capture tools that you did
to make your 3D movie? AR: When I could get my hands on a camera or capture tool I went for
it. As an independent, sometimes employed at the university, sometimes
unemployed and freelancing artist, circumstances varied. Camera/capture
tools were acquired based on availability at the time (motion picture
cameras that were available for dual camera rig shoots, Toshiba stereo
camera available from Gary Cullen for documentary on holographic artists,
dual DV camcorders for mobile shoots, NuView stereo adapter for experimental
film shoots, dual HD cams for recent shoots rented from JR & Stereoscope
for experimental 3D HD shoots in 2010). Good tools make it easier, not necessarily ‘better’. The familiar saying
that the ‘typewriter does not create a good novel’ means that just having
high-end equipment and big budgets doesn’t guarantee a good 3D movie (tell
that to Hollywood…). But when you can’t get the right tools to serve the
vision, the compromises either overwhelm you or you wait for a better
opportunity to create. Ultimately, it is the audience, whether your self, friends, club audience,
art audience, multiplex 3D theater audience, that will determine that
success or failure that all of us must bear and move forward from. There is a ‘luxury’, believe it or not, in using primitive and hand-made
tools: it is the luxury of ‘thinking, evaluating, imagining’ that comes
with ‘slow’ technology. I learned this when I would optically print, one
frame at a time, my experimental films and had the luxury during those
slow times to imagine the next film, and the next. High resolution 3D and pixel counting are not an 'aesthetic' except when
one is solely pursuing high definition ortho-stereoscopic photo 'realism'
or a spectacular 'magical realism' of CG and/or live-action that is 'rich'
with textures and technical details. It's a matter of aesthetic preference, budget, and film culture that
determines whether we go HD, holographic, or work in SD and other formats
like the web. This also applies to whether you shoot solo, in
small groups (friends, collaborators), or with a big crew with insured
expensive camera rigs and the latest studio environments, or distant locations
requiring more permits. Some limitations, such as shooting solo, can open up choices, and 'the
aesthetic work' that is required to imagine alternatives and possibilities.
The absence of a script brings on improvisation, with no need for hierarchical
crews and scheduled continuities to implement. A solo production schedule
is always changeable, adaptable to all kinds of things, and is not a slave
to a contract. The 1/30th rule is not an aesthetic, but neither should it be disregarded.
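(For readers new to the term, the 1/30th rule is the old stereographers' rule of thumb that the interaxial separation should be no more than roughly one-thirtieth of the distance to the nearest subject in the shot. A minimal sketch of that arithmetic follows; the function and parameter names are illustrative, not anything prescribed in the interview.)

```python
def max_interaxial_mm(nearest_subject_mm, divisor=30.0):
    """Rule-of-thumb ceiling on the stereo base: nearest-subject distance / 30."""
    return nearest_subject_mm / divisor

# Example: nearest subject 3 m away -> interaxial of roughly 100 mm or less
print(max_interaxial_mm(3000))  # 100.0
```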
There are many creative ways to make 3D movies, big and small. RZ: How did you synchronize the two cameras? How did it work? AR: Motion picture 16mm cameras (70's - 80's) shooting at sync sound
speed were head and tail slated for editing; Toshiba stereo camera and
NuView adapter with camcorder 90’s were field-sequential internally synced
L-R views requiring no external synchronization; HD DV cams from JR (2010)
were synced using the LANC Shepherd sync. RZ: Did you use a beam splitter? Or side by side? AR: So far I’ve only used side by side, or dual lens camera rigs. I avoid
beam-splitter rigs due to their optical fragility and large size, which
limit mobility and increase set-up time. Why use a large beam splitter
rig requiring assistants and constant cleaning and calibration, when one
can benefit from simpler 3D rigs? Only in close-ups and special applications,
and bigger budgets, does a beam splitter rig make sense to me. RZ: How much did cameras or beamsplitter or other devices cost? AR: Don't want to get into a review of costs. Cameras that I could afford
to rent, buy, or borrow were either consumer or prosumer cams. Beamsplitter
rigs are typically too expensive for an artist to afford or rent without
a client covering costs. RZ: Where did you get them? (online, swapmeet, etc.) AR: In the 70's I used whatever 3D technology I could buy, borrow, or
build, since my work was primarily as independent artist and shoe-string
budgets shooting on motion picture film (with Bolex, Éclair, Arriflex
16mm cameras). When I was teaching at the univ.(80's) I used university a-v 16mm film
equipment; when shooting independently in the 90's, I borrowed consumer
3D cameras (Toshiba from Gary Cullen) or purchased (Nu View, dv cams)
from suppliers located on-line and built my side by side rigs for twin
cams accordingly. Today the 3D camera market has all kinds of consumer (Fuji W1, W3, and
many others) cameras, editing software and computers. I always check on-line
at E-Bay before going shopping. RZ: Please describe the camera, model number, capacity, lenses, focal
length, exposure settings you used? AR: I’ll pass on a lengthy recitation, because my memory is already taxed,
and simply extract some items: 3D motion picture: Arriflex BL, Eclair
cameras with fixed or zoom lenses, 400' magazines, aperture set traditionally
using light meters appropriate to the scene. 3D video: Toshiba camera
head, with fixed focus and focal length, Sony and Panasonic consumer
cameras with manual/auto focus, zoom, and with NuView adapter; twin camera
rigs with Sony and Panasonic prosumer cameras with independent zoom lenses,
manual/auto focus, the standard stuff. HD DV twin Sony Ux-8 cams (interlocked
zooms, manual/auto aperture settings) on a JR rig using the LANC Shepherd. RZ: What new materials or techniques did you make to create a 3D movie?
AR: With an art and avant-garde film background, many of my materials,
subjects, and techniques hark back to my own works or the avant-garde
works of others, and involve experimentation with all aspects of production
and editing/post. Early on, I was interested in the 'virtual body' and performance art,
so I teamed up with a mime artist (Dean Fogal, a student of Marcel Marceau)
and did a series of mime works in 3D that mapped out the space and 'narrative'
of the performance in 3D. This interest in virtual bodies manifested itself
in my 3D video ‘Virtual Flesh’ when I took 12 nude people of various ages
and sexes and created a moving sculpture in 3D video, dissolving and superimposing
one figure upon the other as they thrust themselves out of the screen
in 3D projected on large screens, as they created mutations of human figures
and bi-figures in space that were impossible in sculpture and sometimes
achievable in holography. And this fascination with the virtual body has resulted in some holographic
works (Surrogate 1974 and Surrogate Dressed for Art New Vogue 1984) and
has impacted my work on 3D virtual reality and avatars since 1997 and
onwards. My fixed camera ‘primitive phase’ (after Lumiere) was followed by an ‘imaginarium’
phase (after Melies) where, after my 1997 Louvre show, which was soundly
criticized by a few avant-garde filmmakers but generally liked by most,
I went into the imaginarium of shooting strange angles, moving, strobing,
layering images, and playing with time echoes and image ghosting. I responded
well to the criticism and explored all kinds of new approaches. For a while I got quite
fascinated with time delay (echoes of delayed movement) and movement versus
stereo 3D 'concreteness' and this fascination is most evident in ‘Statues’
where everything that moves in the scene features time-delay ‘echoes’
and everything that is still in the scene (the mime artist) has a 3D concreteness
that is set off from the time-delay induced ‘flatness’ of moving subjects.
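(As a rough software analogue of the time-delay 'echo' described here: the original effects were done with an analog switcher, so the sketch below is only illustrative and all names are invented. Each outgoing frame is mixed with a frame held back a configurable number of frames earlier, and the same delay is applied to both eye channels so the echoes stay fused.)

```python
from collections import deque
import numpy as np

def echo_frames(frames, delay=8, mix=0.5):
    """Yield each frame blended with a copy delayed by `delay` frames.

    Moving subjects smear into trailing 'echoes' while static subjects
    are unchanged, which is the contrast described in the text.
    """
    held = deque(maxlen=delay)
    for frame in frames:
        if len(held) == delay:
            delayed = held[0].astype(np.float32)
            out = ((1.0 - mix) * frame.astype(np.float32) + mix * delayed).astype(frame.dtype)
        else:
            out = frame
        held.append(frame)
        yield out
```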
This, in essence, was an example of ‘violating’ the rules of 3D movie making
at the time, and I loved it. I also melted moving scenes into ‘melting’ color scenes (utilizing switcher
time-delay fx and simple colorization/quantizing) in France ’97 when shooting
through a high-speed train window. During that precise shoot, I thought
about what the avant-garde film guys had said to me about being too ‘conservative’,
and I imagined, as I shot, that the landscapes would be subject to ‘impressionist
technique’ because, very simply, I was passing through Impressionist countryside
where the Impressionists had painted, at the very same time. I think most audiences didn’t get that at the time (1998) when it was
shown in LA and Vancouver and Portland, since in that era 3D movies were
either old Hitchcock or Hollywood horror, and this ‘3D video art’, as
I called it, was as bizarre to them at the time as my ‘Visual Alchemy’
traveling holography exhibition had been in 1977. But this was not client-driven
work, so I enjoyed every moment and went on from
there. History will be the judge, or not, and the work continues. I was also fascinated with the 'installation' qualities of 3D where
a 3D image would be projected on a flat (silver) screen and its image
would essentially occupy a 'volume' within the aperture of that screen
with some parts projecting in front, some parts behind. This (site-specific)
installation art interest also informed my short 3D movie 'Virtual Flesh'
(1996) which featured superimpositions and dissolves of a dozen male and
female bodies, merging and mutating in a space that was largely an extension
into the audience of the ‘screen’ itself (shot behind flexible dental
dam) where the people subjects were pushing to suggest that the theater
itself was 'breathing' and 'pushing towards the audience'. This work, though released for home 3D TV viewing, and projected on
a very large screen at the Louvre (1997) was intended to be a live gallery
installation where a viewer (wearing 3D headgear) could interact with
the projection in real-time mix (using touch sensors and video mixer of
viewer and projected subjects). It was never realized as a gallery installation.
However, I note that some years later, in 2006, Cirque du Soleil in Las
Vegas had a show advertised with precisely the same type of camera shooting
arrangement. I never saw the show but assume that the creators had the
‘virtual body’ and ‘screen’ similarly in mind. My present 2010 work in 3D HD is a matter of discovering new subjects
and techniques suitable for digital 3D HD presentations, whether it be
at home on 3DTVs or in a theater in RealD. My current interests
in many subjects and artistic traditions continue. I’m shooting holograms in 1080p HD 3D, and I’m shooting hats floating
on water as if in the clouds. That kind of stuff plus dramatic and interactive
media compels my interests. And it all has to be in 3D from now on. Whatever
I come up with will be displayed as the works are released beginning in October
2010. RZ: Why did you make those choices? AR: My 2D and 3D cinematic approach has been informed by the works and
circumstances of their production, continuing from 1967 with underground
films, documentaries, experimental and avant garde films, and continuing
with teaching film and film studies/criticism at the university in the
70’s-80’s. The choices were sometimes influenced by the nature of the exhibition
(a festival, fellow 3D enthusiasts, artists, gallery), and were certainly
influenced by the film and video cultures of those decades. As a filmmaker,
you present your work to a live audience and hear their reaction during
and after. You read what critics have to say. As a teacher, you tailor
your work to the students’ ambitions, and work with them in a manner of
mutual discovery. My choices also depend on whether I am working solo, or in an institution,
or if the work serves to satisfy a client’s interests. When I shoot a
3D film I do so not only as an observer but as a ‘participant’, imagining
while contemplating and rendering the subject. I edit as I shoot, since editing is also selection, juxtaposition, montage
(read Eisenstein and many others on the subject). Usually I edit intuitively,
or based on many years of ‘feeling’ the subject in time, knowing where
cuts and transitions must occur, for the subject, for the purpose. I would
call that an 'editor's brain' - a partially functioning (at times) photographic
memory - and since I have edited film, analog and digital video for over
forty years, I resort to a constant re-evaluation of the edited content
based on the 'whole', not just a sequence of segmented scenes. And many
times this editing along timelines, common to both earlier film (using
Moviola uprights or Steenbeck flatbeds), and analog and digital video
editing, can be stored and re-imagined in the head, while shooting or
editing or doing other things. Multi-tasking has been around for a long time and that is what cinematographers
and editors (as well as directors) have in common. My 90's choices were
made due to locations where I traveled (Saturna Island, holography labs
in Vancouver, Los Angeles, locations in Paris, Marseille, Nantes, Grenoble
France, and at my residence in Los Zacatitos, Baja, Mexico) and by the subject
matter that I was interested in. Choices were also influenced by my prior
works in film and holography, as well as the experimental and avant-garde
film cultures that I hung out with. Choices were not typically (unless working for a client) made by others,
or due to random circumstances, or financial incentive or prescription
by the State (as in communist places), religious authority (as in theocratic
states) or academic requirements (apart from 9 years as an academic,
I was mostly conducting independent free-lance work during
my previous decades of work). The subject matter is sometimes related to explicit art practice (performance
art, sculpture, holographics, virtual reality, mime, impressionism, surrealism),
or sometimes is a document of an artist and his practice (Dean Fogal,
Tuba Physical Theatre, holographic artists). Many times the choices occur
spontaneously, as when selecting a location shot, time of day, lighting,
angle and framing (France 97, Nagual, 2010 3D HD works). If I use actors they are typically artists or creative friends who work
unpaid and for the love of the experiment. They are fellow creators in
a process I liken to 'mutual enchantment'. I primarily like to work in, and
make choices based on, an artistic environment with people outside of
the 'film industry'. Typically, I ask the actor, if I am using one, to
interpret and improvise with me. The choice of subject matter, if pre-determined
or based on location and documentary circumstances, comes from imagined
events and can result in script or synopsis and be subject to further
interpretation and improvisation as the shoot progresses. What I really like and marvel at in 3D: An image 'floating' in space,
as is well known in holographics, is a collective dream that I think we
all share and are fascinated by. Weightless, floating, disembodied, phantasms
have fascinated us all for centuries. If you ask people today what they
think a hologram is, they will say 'image in space' (and confuse the
idea with its simulations in Star Wars or other sci-fi flicks). This is
an image-idea that may conform with our ‘dream states’ and exhibit characteristics
of freedom from physical confines (such as a screen, window/aperture,
or semi-visible 'volume screens' with the image projected on mist or a grid,
as in theater). That image in space phenomenon – and the fact that it could be turned
inside out - got me going in holography; the image projecting from the
screen or bisected by the screen got me going in stereoscopic 3D. When
the stereo window vanishes, through composition, lighting, ambient contrast
and color, the stereo 3D image is in 'space' and fascinating. I’m not a ‘photorealist’ or that interested in realistic ortho-stereoscopic
recording and display. To repeat, or try to repeat in stereoscopic 3D
film what my eyes see in a ‘natural scene’ seems to me quite boring and
pointless, unless it serves a narrative function in the film I am making,
or has a place in the commentary that is stereoscopic 3D film art. But then there is Alice and my webseries on ‘The Adventures of Alice’
which portrays, pokes fun at, totally takes issue with, all kinds of 3D
subjects and styles, anaglyph included, and implicates subjects shared
in both stereo 3D and holography: ‘cardboarding’, holographic ‘light’ and
‘coherence’, film theorists and historians, and the respective 3D and
holographic cultures themselves. You can see this ’taking issue with’ and commentary aspect of my work
in my late 80’s holography (‘Young Romantics’ Playpen’, ‘Nose Cone’, ‘Giving
Head to Science and Technology’) and in the 90’s 3D video art (‘Virtual
Imaging’) in which Alice undergoes all kinds of media ‘perversions’ and
narcissism extracted from Hollywood films about ‘VR’ and ‘God’. My 1980’s and 90’s writings on holography (published in Wavefront Magazine,
which I edited) parallel these more overtly political and critical ‘take
issue works’, all the while employing semiotics and communication
theory to present their elaborate analyses (presented at two conferences
on holography). You could say that my ‘academic’ part had grown quite
large in proportion to the others, and I finally left that part behind
when I resigned my tenured position at Simon Fraser University in 1987.
And sometimes my choices come from dreams, sudden inspirations, an image
I see in front of me or on the web, or sometimes they come from plain
irrationality, or surrealist joke and automatic writing that my mind will
play on me. I could cite numerous examples through the years where I investigated
‘dreamtime’ and derived works from that, where my interests in alchemy
and psycho-active creation methods gave me ideas for further works, or
how Castaneda’s books on Don Juan’s life voyages influenced my creations
in Baja Mexico and the 3D video ‘Nagual’ (1999), and where my friends
and culture intersected to produce new ideas, that continue today. But
I’ll leave it at that for now. RZ: What special or different techniques did you use to make 3D? AR: I think I have covered that in the above responses when I addressed
the experimental techniques employed in my works. RZ: How do you rate the cameras/software that you used in terms of performance? AR: Everything I have used I consider ‘substandard’, yet necessary and
acceptable for the task at hand. Otherwise I wouldn’t have made and released
the works. But what was substandard before, becomes ‘standard’ later through
improvements to cameras and software. For example, the Fuji W1 of today
is a far superior 3D camera than the Toshiba 3D camera of the 90’s. You can’t just wait for technology to be improved (or have a ‘crisis’)
if you want to create with the (substandard) tools of the time. You have
to create with the tools available and exhibit your works to a contemporary
audience, not wait for some ‘future’ audience. We look back on SD 3D of the analog era and say ‘low resolution, horrible’,
but when the future looks back on our own HD 3D era, they may say the
same. Good art is not measured by resolution or pixel count, and great
novels aren’t written by typewriters or word processors. Creativity within our limitations produces the condition of exceeding
those limitations through, dare I say it, ‘imagination’ and the ‘power
of suggestion’ that is fundamental to all the arts. RZ: Were you able to do everything you wanted with making the 3D that
you wanted? AR: No, never everything, always something begetting something more.
My multi-media background in and around 3D suggests an inquiring mind
ready for ‘more’. And dissatisfaction with each project only comes after
conclusion, and only because the work has opened up new ‘possibilities’
for more works. I don’t believe in ‘perfection’ (the perfect shape, the perfect work)
and no single work is satisfactory to the point that no further works
need be contemplated. It is out of restlessness, adventure, and feeling good about accomplishments
(that feeling is irreplaceable, and not to be confused with pride) that
I keep producing new works with new technologies. It’s like why I have loved surfing all my adult life: in a creative environment
the next wave is a surprise and the act of taking off on that wave is
always, always, a new experience requiring a sense of adventure and guts.
Yet, if I were able to do everything I wanted, it would first of all require
that it be done in ‘holographic full-color motion pictures projected
in theaters and homes world wide’. And then I would ask for ‘all the time
in the world’ to do it. RZ: What would you have liked to have done that you couldn’t do when
you made your 3D movie? AR: A lot of what I wrote above and more of it. RZ: How did you edit and assemble your 3D movie? AR: The editing in analog
days was all tape to tape, so I will fast-forward to digital desktop and
notebook editing, post 1999. From capture to encoding, to editing, to compositing, to CG, to FX, to
output and playback, the digital era has allowed me to travel with computers
not playback decks, switchers, proc amps and gen lock boards, and that
is an amazing change that I think all of us appreciate. My first professional home workstation PC was a Windows 1 machine (DOS
3), monochrome display, 256K of memory and 20 megs of storage, if I recall,
and shortly thereafter the PC evolution took off. I don’t use Macs
so my software is everything and anything that will run on a PC. I’ve used Adobe Premiere and After Effects for years in all kinds of
versions, plus Photoshop and related programs, Flash, Virtual World building
programs, and 3D Max, Vue, and Daz, and on it goes with Stereo Movie Maker
and Player and onwards towards 3D BluRay and 3DTV formats. I haven’t assembled a 3D movie on tape since 1999, and I’m still contemplating
the future ramifications when holographic 3D movies are affordable for
the home work station. But that might take some time, so I’ll relax for
now. RZ: What computer(s)? AR: I work strictly with PC's, and in the past have worked with SGI machines
on VR projects. RZ: What software? AR: Adobe Premiere, Adobe After Effects, Photoshop, Illustrator, Stereo
Movie Maker, Poser, 3D Studio, Flash, Daz3D, Cortona, Vue d'Esprit, Cosmo
Worlds, and all the Stereo Movie Player free software that has been out
there for a while. The list is constantly growing in terms of the number of
preferred encoders, decoders, and stereo plug-ins for updated Premiere
Pro software and modeling systems, so I cannot really say anything beyond
‘it keeps growing’ and changing for what we need. RZ: What special techniques did you use (a) for shooting/computer generating
and (b) for editing? AR: In the past, it was all dependent on the shooting and editing technologies
available at the time. I started, way back in the 70's, with a single Bolex
with intervalometer looking at monochrome (lines/frames, no textures)
output from PDP1138-generation computers; in the 90's I used 3D Max and
stereoscopic plug-ins (I think it was a VRex plug-in for anaglyph then), and After
Effects and had to output to tape and view in anaglyph. In digital today
I simply render out left eye and right eye views and subject these files
to encoding, editing, post fx and output as HD over/under or any format
suitable for intended viewing (projected? 3DTV?). RZ: How did you look at it in 3D while you were shooting? AR: In the 70's we didn't use real-time monitoring of 3D, just made an educated
'guess' and shot it on film, processed and printed overnight, then viewed
24 plus hours later on a dual 16mm film projector (interlocked) and silver
screen with passive polarized glasses. I first used a field monitor way back in the analog 90's, viewing the
shot in 3D using a Virtual I-O head-mounted display (by way of a feed
from composite analog video), or, if that was too cumbersome, by simply
using a monitor showing the edge displacement and ‘reading’ the (parallax)
information to set the shots. Presently there are many techniques to use for real-time viewing while
editing. For my recent 2010 LA shoots I was building a beam-splitter 3D
video viewing monitor with two 15” LCD monitors, HDMI and analog inputs,
for passive polarized glasses real-time viewing on location or in studio.
However, I didn’t finish it, so I tried making up a digital-to-analog re-scan
and convert unit for my Virtual I-O glasses for 3D monitoring. The results
were so bad that I abandoned it and simply went with my fall-back technique
which was to ‘read’ the 3D scene in terms of horizontal edge displacement
and go with experience. Any problems I dealt with in post by using floating windows or window
adjustments. In the future, I will of course opt for having a 3D field
monitor with me at all times. The possibilities available to me are either
a beam-splitter 3D monitor or headgear that delivers HD quality monitoring,
or 3DTV with active glasses systems, even if I have to rent it. Real time monitoring is essential to 3D shooting and the better the monitor
the better you know what you have and what you can do with it. (I suppose
I could always employ an autostereo monitor with proper feeds; I have
yet to try my 15” autostereo Sharp with DDD software in that respect, but
sometimes too much home cooking becomes tiresome.) For Fuji W1 shoots, which I conduct either in tandem (as reference) with
HD 3D shoots, or independently for short subject shoots and experiments,
the viewing is made simple with an autostereo 3D LCD that is both useful
and economical. The W3 has an improved 720p HD resolution and the autostereoscopic
LCD is brighter and bigger. And all of this is available now for 500 bucks.
The options are wide and available. RZ: How did you look at it in 3D while you were editing? AR: Presently there are many techniques to use for real-time viewing
while editing and I use anaglyph or autostereoscopic. I use anaglyph when
logging, assembling, and editing on my PC workstation viewed on large
screen (mine is 32” for the workstation) LCD. Autostereoscopic viewing
is used to test results. But I’m investigating obtaining and using a 3DTV
with active glasses for all aspects of future editing and viewing/presentation,
when I can afford it. I don’t particularly like editing in anaglyph mode due to ‘color crosstalk’
in my brain which fatigues me a lot more than the old anaglyph 3D headgear
or glasses that I used in all editing of that time. SIDEBAR ON PAST SYSTEMS: AR: In the 70's there were no flicker glasses so it was all a
24 hr wait and dual projection of film with silver screen and passive
polarized glasses. Then came LC flicker glasses in the 90's and an interlaced
NTSC monitor for real-time 3D viewing, then nVidia 3D video cards
in 1999-2000 (with LCS glasses), then a Virtual I-O head-mounted display
in 2004-6 directly connected to video card on PC or analog interlaced
video output from record deck or switcher, then in 2009-2010 using
Stereo Movie Maker in anaglyph mode. With regard to computer graphics, my ideal system is active glasses
and 3DTV. But that is just the most recent in a process that began
for me in creating 1997 VRML worlds which were then viewed with
low frequency flicker glasses and analog video output (field sequential
NTSC). Since computer graphics for me are largely 'resolution independent'
(except for number of vertices) and can be viewed on any (past or
present) 3D video system, including HD 3DTV, the issue becomes
rather moot for CG and where it is headed. RZ: What resolution did you
(a) shoot, (b) edit and (c) finish at? AR: Work previous to the 1990's was always shot, edited and shown in 16mm
film which resolves around 160-200 lines per recording millimeter
and projects around the same. This film work was projected using
dual interlocked 16mm projectors on large (typically 12ft by 8ft)
hand made silver screens (of plywood and painted aluminum); in the
analog 90's I shot, edited, and projected/viewed in NTSC standard
definition (640x480 interlaced, or converted to digital at 720 x
480 NTSC .AVI). These were projected ('finished') at NTSC resolution,
converted to mpeg DVDs, and dual-system (via Andrew Woods de-multiplexer)
projected on silver screens as large as 18' x 12' (Louvre 1997)
using Barco projectors and passive polarized glasses. It actually looked 'pretty good' for the times. Work beginning in 2010 has been shot and edited in 1080p, as well
as 720p and 480i (Fuji) and is to be presented in Oct. 2010 and
beyond on Plasma 3DTV HD. Another thing about resolution: high resolution is the fanfare
of HD whether it be 1080p or 8K. Well, the resolution of a hologram (the film resolution) is typically
3,000 lines per millimeter. So at that resolution the hologram
is in another league (what is the total resolution of a 30 cm by
40 cm film plate hologram at 3K lines per mm? Roughly 900,000 by 1,200,000 resolvable lines, on the order of a trillion picture elements). So, I am pretty easy about 'resolution', knowing full well we're
headed upwards and upwards. And knowing that some really great 3D
films, videos, and CG have been made at ‘lower resolution’. RZ: If special visual effects were involved (CG), how did you build them?
if at all. This will change since I do work in 3D modeling programs (utilizing
Poser, 3D Max/3DS), graphic programs like Photoshop, Corel Paint Shop,
and Illustrator, and scene builders such as Vue d'Esprit and Daz3D, and
of course Flash. My present and future works will feature more 3D scenes created by CG
to function as digitally constructed backdrops for compositing with foreground
actors and action. In the 1990's, when I created a number of VR (virtual
reality) worlds using VRML 2 and VRML 97, these worlds (scenes) were displayed
on Silicon Graphics (Indy or Onyx) machines in stereo 3D and viewed with
LC shutter glasses on available monitors. In this era, due to the relatively
low computational power (compared to today), these movie-mapped 3D objects and
worlds would sputter and jam when playing the mpegs (movie maps) and sound.
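(For readers new to stereoscopic CG: the separate left- and right-eye views mentioned in this answer, and in the rendering workflow described earlier, are typically generated from a camera pair offset along the camera's right vector. The sketch below is a generic illustration of that setup, not the VRML or SGI code actually used; all names are illustrative.)

```python
import numpy as np

def stereo_eye_positions(cam_pos, look_at, up, interaxial):
    """Return (left_eye, right_eye) positions for a parallel stereo pair.

    The eyes sit +/- interaxial/2 along the camera's right vector; each
    eye is then rendered looking toward `look_at`, with convergence set
    later by an image shift, as discussed elsewhere in the interview.
    """
    cam_pos, look_at, up = map(np.asarray, (cam_pos, look_at, up))
    forward = look_at - cam_pos
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    half = right * (interaxial / 2.0)
    return cam_pos - half, cam_pos + half

# Example: a camera 5 units from its subject with a 0.065-unit interaxial
left_eye, right_eye = stereo_eye_positions([0, 1.6, 5], [0, 1.6, 0], [0, 1, 0], 0.065)
print(left_eye, right_eye)
```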
Today is a different story, and I’m looking towards stereoscopic home
workstation 3D in all the CG jobs that I can land (speech-interactive
avatars being the present focus) and promote the client’s interest in
publishing the work in stereo 3D for the new stereo 3D viewing systems. RZ: How did you combine CG and live action? AR: I shoot with green screen and do compositing in Adobe After Effects
or Premiere. This is a pretty straightforward and well-known technique and
really depends on budget and lighting. The most recent green screen shots
that we did in 2010 in LA used available and borrowed screen material
and lighting kits. It’s a minor interest for me right now. RZ: What special techniques/software did you use to do this? AR: I have used known film and video composite techniques, arrived at
over decades by others in their works, and I use the available software,
cited above. RZ: How did you decide to measure parallax values for 3D? AR: I typically eyeballed it while setting up the shot and determined
where the window would be relative to foreground and how deep the stereo
scene should be for the shot. In other words, I am prone to first construct
the scene or arrange the scene for the camera and then make parallax adjustments
(window placement) afterwards. I like utilizing varying convergences when I have control over background
and depth of stereo scene. I also prefer engaging with the window, with
images at times protruding through the window into the audience space.
This is a matter of stereo aesthetics that is highly personal to me and
not grounded in ‘realism’ or ortho-stereo intentions. Having concluded that ‘the 1/30 rule is not an aesthetic’, I am interested
in a dynamic space that is neither completely 'real' nor completely 'synthetic'
in appearance and sometimes use on-screen transformations/transitions
that point out the artificial nature of the 'space' in view. At times ‘cardboarding’ can be interesting to me, and of course that
results from a deliberate lens focal length setting, lighting, and convergence.
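(A back-of-envelope way to see why focal length and convergence produce the 'cardboard' look: for a shifted-parallel rig, on-sensor parallax is approximately focal length x interaxial x (1/convergence_distance - 1/subject_distance), a standard stereography approximation rather than anything quoted from the interview. Keeping a subject the same size in frame with a longer lens means moving the camera back, which shrinks the parallax across the subject's own depth while its image size stays constant, hence the flattening. A small sketch with illustrative names:)

```python
def sensor_parallax_mm(focal_mm, interaxial_mm, converge_mm, subject_mm):
    """Approximate on-sensor parallax for a shifted-parallel stereo rig.

    Positive values fall behind the convergence plane (the 'screen'),
    negative values in front of it. Thin-lens approximation only.
    """
    return focal_mm * interaxial_mm * (1.0 / converge_mm - 1.0 / subject_mm)

# Same framing of a subject with ~300 mm of its own depth, wide vs. long lens
# (camera distance scaled with focal length so the subject stays the same size):
for focal_mm, distance_mm in ((24, 1500), (100, 6250)):
    spread = (sensor_parallax_mm(focal_mm, 65, distance_mm, distance_mm + 300)
              - sensor_parallax_mm(focal_mm, 65, distance_mm, distance_mm))
    print(focal_mm, "mm lens:", round(spread, 3), "mm of parallax across the subject")
```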
It seems to me that stereoscopic 3D is neither 'natural' nor 'magical',
but a blend of the two, because it can refer to both worlds, the one in
binocular perception of nature and the world of dreams and impossibilities. I don’t buy the ‘reality’ of stereoscopic 3D just like I couldn’t buy
the ‘fidelity’ of holographic mimesis. I like that it can do both, in
all kinds of interesting interpretations and expressions. I'm interested
in both the near space in front of the zero parallax window and the virtual
space behind it (backgrounds). Projecting an image 'in front of the screen' is no 3D gimmick to me.
We have been doing that in holography for years (image plane holograms,
pseudoscopic real image holograms, holograms using concave (pseudoscopic)
mold subject and plate inverted to project the image only out of the plate
in orthoscopic view, holographic projections using parabolic screen to
'float' an image in space between the screen and viewer). The subject of images projecting 'in front of' or 'behind' the screen
(or holographic plate) has been one of the ongoing subjects in my works
beginning in the 1970's. With stereo 3D one has choices of 'where to put
the image and scene' relative to the stereo window and viewing/projection
screen. I know that currently there is a ‘trend’ to avoid images projecting out
of the screen (towards the viewer) except for brief moments of appearance
or motion (‘effects’), and that the use of these negative parallax images is
considered 'gimmicky' and 'old school', or ‘artsy’. Well, that’s all well and
good if you’re doing a 3-hour feature, because negative parallax can get
tiring, and we all know that(!). Since I've always acted as the stereo cinematographer and stereographer
on my productions, I am flexible with all parallax issues especially as
they concern the subject matter and what I want it to ‘look like’ in stereoscopic
3D. What I like and what other people like will have to be put to the
audience test, and will depend on what ‘type’ of audience we are dealing
with. RZ: A lot of 3D? Or just a little 3D? AR: I do a lot of 3D in short spans of time, in various forms (video,
VR, holographics) and then there are pauses, sometimes lasting years.
This results in bursts of short subjects, and the pauses are typically
related to financial inability or lack of funded projects, or competing
projects for clients. When I resume, it will typically feature a different approach to the
subject matter and technology, since by that time the technologies and
standards have changed. RZ: Why? AR: I continually overlap projects in different media (3D film, holography,
writing and project development, sculpture, web design, speech-interactive
avatar design and production, 3D experiments with photography and stereo
film (Toshiba, W1)), and all of these areas require time and concentration.
These overlaps and different works are due to the nature of being an independent
free-lancing artist. You have to work where the work is sometimes, so if I have to abandon
stereo 3D in favor of a paying gig for avatar animation, then that’s what
happens. RZ: What storage medium did you use to save/deliver 3D movie while working
on it? External hard drive? DVD? Thumb drive? Shuttle? Other? AR: When shooting analog 3D video I stored on 3/4" U-matic, SuperVHS,
digital 8, and VHS. When shooting SD and HD 3D recently, I stored on 32 gig
SD flash memory; when creating 3D CG I use the internal hard drives on
my PC. I haven't had to use external HDD yet because I shoot short subjects,
but obviously I intend to make longer subjects and will use any of the
digital storage technology that is out there and affordable to me. RZ: How have you formatted your 3D movie for editing and projection?
Side-by-side? Over/under? Separate Left/Right? AVI? Tiff? Targa? AR: I edit one channel and conform the second to the first for output.
Editing is done in AVI with compression dependent on the capabilities
of my workstations (which change, and are currently quad-core and HD).
Lower compression or uncompressed is best and it is a matter of ‘storage
space’ on hard drives that can determine the format sometimes. In present digital editing, I import the 3D material (whether it be separate
channel AVI or other camera formats with .mpo files), and evaluate each
shot using Stereo Movie Maker and Player. After evaluation, I edit a single
channel and conform the second to the first for multiplexing or encoding
(to the requirements of the 3D TV – checkerboard, side by side, or over
and under). For output/projection I have used separate left/right for dual projectors
(interlaced 3D on DV tape or DVD (MPEG-2) is demultiplexed and sent to
the projectors). For output to autostereoscopic monitors (Sharp) I previously
output in interlaced form. Since all viewing and projection systems and formats are changing, I
will simply say that these formats will be determined by the end result
and what system it is intended to run on. RZ: What format or codec? MPEG-4? H.264? XVID? ffdshow? Other? 8bit?
16bit? 32bit? AR: All MPEG formats, AVI, H.264, or others that are available and useful. RZ: How have you looked at your finished movie in 3D? Laptop? Projection?
3DTV? 3D Theater? AR: I've looked at the finished work on a laptop in anaglyph, on an autostereoscopic
3D monitor, and in projection with passive polarized glasses on small and
large screens in 3D. RZ: What did you have to do to look at your finished movie in 3D? AR: Buy the right technology required to view the work in 3D. In some
cases it was borrowed, in some cases the gallery or theater provided the
technology, but most often I had to buy it, and am currently looking at
buying a 3DTV plasma and certainly a DLP for fun and comparisons. In the old days (90’s) we had to look at 3D movies on a CRT interlaced
NTSC monitor using LC flicker glasses. The flicker was annoying, the glasses
a turn off, and a lot of discomfort occurred due to the glasses themselves.
Later designs with wireless and frequency multiplied active eyewear improved
the viewing experience. My 6 year old autostereo Sharp is pretty awful in comparison to current
autostereo screens, and I think the present autostereo monitors which
I have seen are still inferior to the stereo quality of active glasses
and 3DTV. I like my work best when I project it with two matched video projectors
on a large silver screen with passive polarized glasses. But when I can
afford a large (over 60”) 3D TV with active eyewear, I’m sure I’ll change
my mind and go for the portable TV. RZ: How did you have to format your 3D movie to project it in stereo?
AR: With analog field-sequential 3D, I had to buy a de-multiplexer (from
Andrew Woods) and project the discrete left and right eye channels using
a variety (Sharp, Barco) of matched video projectors. That produced better
results than using a single DLP (Micropol) VRex projector which I saw
employed by SCSC. I’m about ready to find out how my current HD works play when I use
a Panasonic Plasma 3DTV and two matched video projectors in October. RZ: What display process? Anaglyph? Linear Polarizer/silverscreen? RealD?
Dolby Digital 3D? ExpanD? Autostereoscopic? AR: I have used Anaglyph red-cyan, Linear Polarizer/silverscreen, Field-alternating
LCS glasses, and Autostereoscopic displays for presentations. RZ: What did you learn about production and 3D by making your 3D movie?
AR: That the process of creation is never finished, and undergoes continued
change in our networked and shared technical environments. That means
no one is looking at analog 3D much these days, except collectors and
clubs, or some art festivals. HD 3D and desktop publishing are here and getting better. Output to
3D Blu-ray not analog tape machine. As one technology replaces another, there is a future technology called
'holographics', which is beyond the stereoscopic 3D of today. Its orders-of-magnitude
rise in resolution requirements and its method of recording all
'eyes' from any given vantage point, or computer generated scene (3D with
full parallax), mean that this upcoming technology will also, at least partially,
replace stereoscopic 3D TV and theaters. The arts I'm interested in and practicing are not determined
by whatever 3D technology there is at the time. I've already done stereo
and holographic 3D through the decades beginning in the 1970's. Those
'tech typewriters' of the times didn't create the 'novel' which was my
art. The 'imagining' of the works through art and technical means
was the reason those 'typewriters' were employed or self-created. I've learned that production and 3D are a function of what one wishes
to 'make'. If it is big-budget, with crew and the latest equipment, known actors,
and general-public screenplay, then it takes one direction to satisfy
the millions of dollars of investment. If it is a commercial job for a
client then the clients have determined how much and to whom this work
is directed, and production capabilities at the home desktop level have entered
the picture. If it is a work that is deemed interesting to the 'art' culture
and market, or if it is a work that is purely 'personal' and to be shown
amongst friends, then the production process becomes completely adaptive,
sometimes innovative, and the 3D is stretched and explored 'to the limits'.
RZ: What would you like to do next with 3D movies? AR: Make them interactive with the audience (as in my SCAT! project)
as a game of narratives, and participate in, and see, holographic 3D movies
in beyond HD systems, a holographic 3D TV that replaces the 3DTV home
theater of today. I have a lot of short subject 3D movie projects in mind; I continue working
on securing financing for SCAT!, plan to move from stereoscopic to holographic
works and exhibitions, and have that 'un-retired' impetus to keep working
and avoid starvation. The projects range from 3D video art short subjects
which I am currently doing, documentaries of the natural world/creatures
which I hope to get funding for, continuing 3D documentaries on holographic
arts, to 3D feature films, and interactive 3D film presentations of a
concept and script that I have been developing for over 10 years. As a freelancer I have to make my work when I can, when there is
either a client or when my financial circumstances permit. Either way, I know
that 3D movies today will be replaced by future full holographic 3D movies
delivered to the home, workplace, gallery, internet, and theaters. I hope to make my art compatible with both the present and the
future. RZ: Why? AR: It’s in my imagination, in my aspirations, and in my daily life.
And, as we say in surfing or in basketball, it’s in my ‘blood’. It would
be nice to do it ‘all the time’, but 3D has to defer to life and family
and other art interests. All I can do is create more, when possible, and
enjoy the rest. |